Convergence Properties of Kronecker Graphical Lasso Algorithms
Abstract
This report presents a thorough convergence analysis of Kronecker graphical lasso (KGLasso) algorithms for estimating the covariance of an i.i.d. Gaussian random sample under a sparse Kronecker-product covariance model. The KGLasso model, originally called the transposable regularized covariance model by Allen et al. [1], places a pair of ℓ1 penalties on the Kronecker factors to enforce sparsity in the covariance estimator. The KGLasso algorithm generalizes Glasso, introduced by Yuan and Lin [2] and Banerjee et al. [3], to covariances having Kronecker-product form. It also generalizes the unpenalized maximum-likelihood flip-flop (FF) algorithm of Dutilleul [4] and Werner et al. [5] to estimation of sparse Kronecker factors. We establish that the KGLasso iterates converge pointwise to a local maximum of the penalized likelihood function. We derive high-dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity, and these rates establish that KGLasso has significantly faster asymptotic convergence than FF and Glasso. Simulations validate the results of our analysis. For example, for a sparse 10,000 × 10,000 covariance matrix equal to the Kronecker product of two 100 × 100 matrices, the root mean squared error of the inverse covariance estimate using FF is 3.5 times larger than that obtainable using KGLasso.
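To make the alternating structure concrete, the following is a minimal sketch of a KGLasso-style flip-flop iteration, assuming a matrix-normal model with cov(vec(X_i)) = B ⊗ A and using scikit-learn's `graphical_lasso` for each penalized factor update. The names (`kglasso`, `lam_a`, `lam_b`, `n_iter`) are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def kglasso(X, lam_a, lam_b, n_iter=10):
    """Sketch of a KGLasso-style flip-flop (illustrative, not the paper's code).

    X       : (n, p, q) array of i.i.d. matrix-normal samples with
              cov(vec(X_i)) = B kron A, A being p x p and B being q x q.
    lam_a/b : l1 penalty levels on the two Kronecker-factor precisions.
    """
    n, p, q = X.shape
    Theta_a, Theta_b = np.eye(p), np.eye(q)  # factor precision estimates
    for _ in range(n_iter):
        # A-step: plug in the current B precision, then run Glasso.
        S_a = sum(x @ Theta_b @ x.T for x in X) / (n * q)
        _, Theta_a = graphical_lasso(S_a, alpha=lam_a)
        # B-step: the symmetric update with the roles of A and B swapped.
        S_b = sum(x.T @ Theta_a @ x for x in X) / (n * p)
        _, Theta_b = graphical_lasso(S_b, alpha=lam_b)
    return Theta_a, Theta_b  # sparse estimates of A^{-1} and B^{-1}
```

In the limit of zero penalties, each step reduces conceptually to the unpenalized FF update of Dutilleul [4]; the ℓ1 penalties are what distinguish KGLasso from FF.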
Similar Resources
Rigorous Analysis of Kronecker Graph Algorithms
Real-world graphs have been observed to display a number of surprising properties. These properties include heavy tails for in- and out-degree distributions, small diameters, and a densification law [3]. These features do not arise from the classical Erdős-Rényi random graph model [2]. To address these difficulties, Kronecker Graphs were first introduced in [3] as a new method of generating graph...
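For illustration, a stochastic Kronecker graph can be sampled by taking the k-fold Kronecker power of a small initiator matrix of edge probabilities and drawing each edge independently. The sketch below assumes this standard construction; the initiator values are arbitrary illustrative choices.

```python
import numpy as np

def kronecker_graph(initiator, k, seed=0):
    """Sample an adjacency matrix from the k-fold Kronecker power
    of a small initiator matrix of edge probabilities."""
    P = initiator
    for _ in range(k - 1):
        P = np.kron(P, initiator)  # probability matrix grows exponentially in k
    rng = np.random.default_rng(seed)
    return (rng.random(P.shape) < P).astype(int)  # draw each edge independently

# Example: a 2 x 2 initiator yields a graph on 2**k nodes.
A = kronecker_graph(np.array([[0.9, 0.5], [0.5, 0.1]]), k=10)
```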
The Graphical Lasso: New Insights and Alternatives
The graphical lasso [5] is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ⁻¹ [2, 11]. The R package GLASSO [5] is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of GLASSO can be tricky; the converge...
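As a usage sketch, here is the same estimator via scikit-learn's `GraphicalLasso` rather than the R package GLASSO that the snippet refers to; the penalty value `alpha=0.1` is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # n = 200 samples, p = 10 variables
model = GraphicalLasso(alpha=0.1).fit(X)  # alpha is the l1 penalty weight
Theta = model.precision_                  # sparse estimate of Sigma^{-1}
print((np.abs(Theta) > 1e-8).sum())       # number of nonzero precision entries
```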
Efficient inference in matrix-variate Gaussian models with i.i.d. observation noise
Inference in matrix-variate Gaussian models has major applications for multi-output prediction and joint learning of row and column covariances from matrix-variate data. Here, we discuss an approach for efficient inference in such models that explicitly accounts for i.i.d. observation noise. Computational tractability can be retained by exploiting the Kronecker product between row and column covarian...
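One standard way such tractability is obtained, shown here as a sketch under the assumption cov(vec(X)) = B ⊗ A + σ²I (not necessarily the exact scheme of the paper), is to diagonalize the two factors once and solve in the joint eigenbasis, avoiding the pq × pq Kronecker matrix entirely.

```python
import numpy as np

def kron_noise_solve(A, B, sigma2, V):
    """Solve (B kron A + sigma2 * I) vec(X) = vec(V), column-major vec,
    without ever forming the p*q x p*q Kronecker matrix."""
    da, Ua = np.linalg.eigh(A)            # A = Ua diag(da) Ua^T
    db, Ub = np.linalg.eigh(B)            # B = Ub diag(db) Ub^T
    W = Ua.T @ V @ Ub                     # rotate into the joint eigenbasis
    W = W / (np.outer(da, db) + sigma2)   # divide by the joint eigenvalues
    return Ua @ W @ Ub.T                  # rotate back; vec(result) = solution
```

The cost is dominated by the two eigendecompositions, O(p³ + q³), instead of the O(p³q³) of a naive dense solve.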
Applications of the lasso and grouped lasso to the estimation of sparse graphical models
We propose several methods for estimating edge-sparse and node-sparse graphical models based on lasso and grouped lasso penalties. We develop efficient algorithms for fitting these models when the numbers of nodes and potential edges are large. We compare them to competing methods including the graphical lasso and SPACE (Peng, Wang, Zhou & Zhu 2008). Surprisingly, we find that for edge selection...
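As a hint of how node sparsity arises from grouped penalties, the group lasso shrinks whole rows (all edges touching one node) toward zero as a unit. Below is a minimal sketch of the corresponding row-wise proximal operator; it illustrates the penalty's effect and is not the paper's algorithm.

```python
import numpy as np

def row_group_prox(M, lam):
    """Group-lasso prox applied row-wise: each row of M (one node's edge
    weights) is shrunk toward zero as a group, so an entire node's edges
    vanish together once lam exceeds the row norm."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return M * scale
```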
SVD-Based Screening for the Graphical Lasso
The graphical lasso is the most popular approach to estimating the inverse covariance matrix of high-dimensional data. It iteratively estimates each row and column of the matrix in a round-robin style until convergence. However, the graphical lasso becomes infeasible for large datasets due to its high computational cost. This paper proposes Sting, a fast approach to the graphical lasso. In order ...
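The snippet does not detail Sting's SVD-based rule, but the classical block-diagonal screen for the graphical lasso (due to Witten et al. and Mazumder & Hastie) illustrates the general idea of screening: thresholding the empirical covariance at the penalty level splits the problem into connected components that can be solved independently. A minimal sketch, not Sting itself:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def glasso_screen(S, alpha):
    """Covariance-thresholding screen: nodes i and j can share an edge in
    the Glasso solution only if |S_ij| > alpha, so the problem decomposes
    over the connected components of the thresholded graph."""
    adj = np.abs(S) > alpha
    np.fill_diagonal(adj, False)
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    return n_comp, labels  # run Glasso separately within each component
```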